
    Human and Object Recognition with a High-resolution tactile sensor

    This paper describes the use of two artificial intelligence methods for object recognition via pressure images from a high-resolution tactile sensor. Both methods follow the same procedure of feature extraction followed by classification with a supervised Support Vector Machine (SVM). The two approaches differ in how features are extracted: the first uses the Speeded-Up Robust Features (SURF) descriptor, while the other employs a pre-trained Deep Convolutional Neural Network (DCNN). In addition, this work shows the application of the methods to object recognition for rescue robotics, distinguishing between different body parts and inert objects. The performance of the proposed methods is analyzed in an experiment with 5-class non-human and 3-class human classification, providing a comparison in terms of accuracy and computational load. Finally, it is discussed how SURF-based feature extraction can be up to five times faster than DCNN, whereas the accuracy achieved with DCNN-based feature extraction can be 11.67% higher than with SURF. This work was supported by project DPI2015-65186-R, the European Commission under grant agreement BES-2016-078237, and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.
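The two-stage pipeline described above (feature extraction followed by supervised classification) can be sketched as follows. This is a minimal, dependency-free illustration: a grid mean-pooling step stands in for the SURF/DCNN feature extractors and a nearest-centroid classifier stands in for the SVM, and all images, sizes, and class names are invented for the example.

```python
# Sketch of the two-stage pipeline: extract a feature vector from a
# pressure image, then classify it with a supervised model. A nearest-
# centroid classifier stands in for the SVM so the example stays
# dependency-free; image sizes and class names are illustrative only.

def extract_features(image, grid=2):
    """Mean-pool a 2D pressure image over a grid x grid layout."""
    rows, cols = len(image), len(image[0])
    rh, cw = rows // grid, cols // grid
    feats = []
    for gr in range(grid):
        for gc in range(grid):
            cell = [image[r][c]
                    for r in range(gr * rh, (gr + 1) * rh)
                    for c in range(gc * cw, (gc + 1) * cw)]
            feats.append(sum(cell) / len(cell))
    return feats

def fit_centroids(samples):
    """samples: dict mapping label -> list of feature vectors."""
    centroids = {}
    for label, vecs in samples.items():
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs)
                            for i in range(dim)]
    return centroids

def classify(features, centroids):
    """Assign the label of the nearest class centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(features, centroids[lbl]))

# Toy 4x4 pressure images: a "human" contact presses softly and broadly,
# an "inert" contact concentrates pressure in a small area.
human = [[1, 1, 1, 1], [1, 2, 2, 1], [1, 2, 2, 1], [1, 1, 1, 1]]
inert = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
centroids = fit_centroids({
    "human": [extract_features(human)],
    "inert": [extract_features(inert)],
})
print(classify(extract_features(human), centroids))  # -> human
```

In the real pipeline the feature vectors would come from SURF descriptors or DCNN activations, and the classifier would be an SVM trained on labelled contact images.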

    Floristic notes from the Balearic Islands (I)


    Mobile Robot Lab Project to Introduce Engineering Students to Fault Diagnosis in Mechatronic Systems

    This document is a self-archiving copy of the accepted version of the paper. Please find the final published version in IEEE Xplore: http://dx.doi.org/10.1109/TE.2014.2358551. This paper proposes lab work for learning fault detection and diagnosis (FDD) in mechatronic systems. These skills are important in engineering education because FDD is a key capability of competitive processes and products. The intended outcome of the lab work is that students become aware of the importance of faulty conditions and learn to design FDD strategies for a real system. To this end, the paper proposes a lab project in which students are asked to develop a discrete event dynamic system (DEDS) diagnosis that copes with two faulty conditions in an autonomous mobile robot task. A sample solution is discussed for LEGO Mindstorms NXT robots with LabVIEW. This innovative practice is relevant to higher-education engineering courses related to mechatronics, robotics, or DEDS. Results are also given from the application of this strategy as part of a postgraduate course on fault-tolerant mechatronic systems. This work was supported in part by the Spanish CICYT under Project DPI2011-22443.
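The kind of DEDS diagnoser students are asked to build can be sketched as an event-trace comparison against a nominal task model: the observed events are checked against the expected sequence, and a deviation is mapped to a fault hypothesis. The event names and fault labels below are illustrative assumptions, not the paper's sample solution.

```python
# Minimal sketch of a discrete-event diagnoser: the robot task is
# modelled as a nominal sequence of events, and a missing or unexpected
# event is mapped to a fault hypothesis. Event and fault names are
# invented for the example.

EXPECTED = ["start", "line_detected", "obstacle_cleared", "goal_reached"]
FAULTS = {
    "line_detected": "light-sensor fault (line never seen)",
    "obstacle_cleared": "drive fault (robot stuck at obstacle)",
}

def diagnose(observed):
    """Compare the observed event trace against the nominal model."""
    for step, expected in enumerate(EXPECTED):
        if step >= len(observed) or observed[step] != expected:
            return FAULTS.get(expected, "unknown fault at event %r" % expected)
    return "nominal"

print(diagnose(["start", "line_detected", "obstacle_cleared", "goal_reached"]))
print(diagnose(["start"]))  # line never detected -> sensor fault hypothesis
```

A full DEDS diagnoser would track a state machine with timeouts rather than a flat sequence, but the comparison of observed against expected events is the core of the exercise.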

    Visual control of robot manipulators: a tool for design and learning

    This article describes a new tool for the simulation and execution of image-based control systems on robot manipulators. On the one hand, the tool allows the various parameters involved in a visual servoing task to be adjusted easily and intuitively; on the other, it can be applied in teaching to ease the learning of this kind of visual control system, which is increasingly widespread and has a growing range of applications. The tool implements classical image-based control algorithms as well as newer moment-based ones that give the system greater flexibility; its educational character also allows new visual control algorithms to be integrated easily for evaluation. This work was partially funded by OMRON following the award of the OMRON prize for "Initiation to research and innovation in automatic control" in the 2003 call.
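The classical image-based control law implemented by tools of this kind drives the robot with a velocity proportional to the image-feature error, v = -λ(s - s*). The toy loop below applies this law directly in feature space, assuming an identity interaction matrix for simplicity; the gain and feature values are illustrative, not taken from the tool.

```python
# Toy image-based visual servoing loop: the feature error decays
# geometrically under the proportional law v = -lambda * (s - s*).
# The interaction matrix is taken as identity for simplicity.

LAMBDA = 0.5  # control gain (illustrative)

def ibvs_step(s, s_star):
    """One proportional visual-servoing update on a feature vector."""
    error = [si - ri for si, ri in zip(s, s_star)]
    velocity = [-LAMBDA * e for e in error]
    s_next = [si + vi for si, vi in zip(s, velocity)]
    return s_next, error

s, s_star = [4.0, -2.0], [0.0, 0.0]  # current and desired image features
for _ in range(20):
    s, error = ibvs_step(s, s_star)
print([round(x, 4) for x in s])  # features converge toward s*
```

In a real visual servoing system the interaction matrix (or an estimate of it) maps feature errors to camera velocities, and the moment-based algorithms mentioned above replace point features with image moments.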

    Transfer learning or design a custom CNN for tactile object recognition

    International Workshop on Robotac: New Progress in Tactile Perception and Learning in Robotics. Novel tactile sensors allow pressure readings to be treated as standard images owing to their high resolution. Therefore, computer vision algorithms such as Convolutional Neural Networks (CNNs) can be used to identify objects in contact. In this work, a high-resolution tactile sensor has been attached to a robotic end-effector to identify objects in contact. Moreover, two CNN-based approaches have been tested in a pressure-image classification experiment: a transfer learning approach using a CNN pre-trained on an RGB image dataset, and a custom-made CNN trained from scratch on tactile information. A comparative study of their performance has been carried out. This work was supported by Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech, Spanish project DPI2015-65186-R, the European Commission under grant agreement BES-2016-078237, and the educational project PIE-118 of the University of Malaga.

    Biosensors in Rehabilitation and Assistance Robotics

    Robotic developments in the field of rehabilitation and assistance have seen a significant increase in the last few years [...]

    Methods for autonomous wristband placement with a search-and-rescue aerial manipulator

    This paper presents a new robotic system for Search And Rescue (SAR) operations based on the automatic placement of a wristband on the victim's arm, which may provide identification, beaconing, and remote sensor readings for continuous health monitoring. The paper focuses on the development of automatic target localization and device placement using an unmanned aerial manipulator. The automatic wrist detection and localization system uses an RGB-D camera and a convolutional neural network based on the Faster R-CNN region-proposal method. A lightweight parallel delta manipulator with a large workspace has been built, and a new design of wristband in the form of a passive detachable gripper is presented which, on contact, automatically attaches to the human while disengaging from the manipulator. A new trajectory planning method has been used to minimize the torques caused by the external forces during contact, which cause attitude perturbations. Experiments have been carried out to evaluate the machine learning method for detection and localization, and to assess the performance of the trajectory planning method. The results show that the VGG-16 neural network provides a detection accuracy of 67.99%. Moreover, simulation experiments show that the new trajectories minimize the perturbations to the aerial platform. This work was supported by Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.
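The rationale behind the torque-minimizing trajectories can be illustrated with the rigid-body relation τ = r × F: a contact force applied at the end-effector perturbs the platform attitude in proportion to the lever arm from the centre of mass. The sketch below picks, among hypothetical candidate contact poses, the one producing the smallest torque; all vectors are invented for the example and do not reproduce the paper's planner.

```python
# Illustration of why contact geometry matters for an aerial manipulator:
# the torque on the platform is tau = r x F, so a contact force aligned
# with the lever arm produces no attitude disturbance. All values are
# invented for the example.

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def torque_magnitude(r, f):
    """Magnitude of the torque r x F about the platform centre of mass."""
    t = cross(r, f)
    return (t[0] ** 2 + t[1] ** 2 + t[2] ** 2) ** 0.5

force = (0.0, 0.0, -2.0)  # expected contact force, N (illustrative)
candidates = {                      # lever arm from platform CoM, m
    "below_com": (0.0, 0.0, -0.4),  # contact directly under the CoM
    "offset":    (0.3, 0.0, -0.4),  # laterally offset contact
}
best = min(candidates, key=lambda k: torque_magnitude(candidates[k], force))
print(best)  # -> below_com
```

The paper's planner works on full trajectories rather than single poses, but the same quantity, the torque induced by contact forces, is what it minimizes.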

    Cooperative tasks between humans and robots in industrial environments

    Collaborative tasks between human operators and robotic manipulators can improve the performance and flexibility of industrial environments. Nevertheless, the safety of humans should always be guaranteed, and the behaviour of the robots should be modified when there is a risk of collision. This paper presents the research that the authors have performed in recent years to develop a human-robot interaction system which guarantees human safety by precisely tracking the complete body of the human and by activating safety strategies when the distance between them is too small. This paper not only summarizes the techniques implemented to develop this system, but also shows its application in three real human-robot interaction tasks. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 231640 and the project HANDLE. This research has also been supported by the Spanish Ministry of Education and Science through research project DPI2011-22766.
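A distance-triggered safety strategy of the kind described above can be sketched as a simple mapping from the measured human-robot distance to a robot behaviour. The thresholds and mode names below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a distance-based safety strategy: the robot switches
# behaviour according to the measured human-robot distance. Thresholds
# and mode names are illustrative assumptions.

def safety_mode(distance_m, stop_dist=0.3, slow_dist=1.0):
    """Map a human-robot distance (metres) to a robot behaviour."""
    if distance_m < stop_dist:
        return "stop"            # imminent collision risk: halt the robot
    if distance_m < slow_dist:
        return "reduced_speed"   # human nearby: slow down and monitor
    return "normal"              # safe distance: nominal operation

for d in (0.1, 0.6, 2.0):
    print(d, safety_mode(d))
```

In the system described, the distance input would come from the full-body tracking subsystem, evaluated continuously for every tracked body part rather than a single scalar.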

    Tactile information classification for the detection of people

    This article presents the design of a tactile end-effector and the application of artificial intelligence techniques for the detection of people using a lightweight 6-degree-of-freedom manipulator arm. The end-effector incorporates a high-resolution tactile sensor that produces pressure images. The system extracts haptic information in disaster situations, where visibility is generally low, in order to assess the condition of victims according to the urgency of attention (triage). Two artificial intelligence methods have been implemented to classify the images obtained by the tactile sensor, distinguishing contacts with people from inert objects in disaster scenarios. Each method comprises a feature extractor for the pressure images and a classifier obtained by supervised learning. To validate the methods, classification experiments were carried out with Human and Non-human classes. Finally, the two methods were compared in terms of classification accuracy and classification time, based on the experimental results. This work was supported by Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.